Abstract: Nowadays many security schemes promise excellent protection, yet in spite of that, many of them fail to deliver under real-world testing. If every party can readily read the data and messages, confidentiality is lost; the data and messages must therefore be encrypted so that only authorized persons can recover them. This problem arises when two or more parties wish to jointly carry out a computation while keeping their own inputs secret. In the proposed neural-network-based approach, each piece of data is protected with a secret key, and the associated weights are likewise protected when messages are sent to the server. In this paper, we present a privacy-preserving algorithm for neural network learning when the dataset is arbitrarily partitioned between the two parties. We show that our algorithm is secure and leaks no knowledge about the other party's data, and we demonstrate its efficiency through experiments on real-world data. The computation and communication costs of each party are minimal and independent of the number of participants. Our scheme supports flexible operations over ciphertexts, and numerical analysis and experiments on commodity hardware show that it is secure, efficient, and accurate. Settings in which the training data are distributed between two parties are quite common nowadays. Existing cryptographic approaches, such as the secure scalar product protocol, provide a secure way to learn a neural network when the training data set is vertically partitioned. In our algorithm, after each round of training both parties hold only random shares of the weights rather than the exact weights, which guarantees stronger security and privacy against intrusion by the other party.

Keywords: Back Propagation, Neural Network, Data Security, Client and Server.
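
The abstract's central idea is that after each training round each party holds only a random share of the weight vector. The following is a minimal sketch of additive secret sharing of weights, illustrating that idea under stated assumptions; it is not the paper's actual protocol, and the function names and the use of NumPy are illustrative choices.

```python
# Minimal sketch (illustrative only): additive secret sharing of the weight
# vector, so that after a training round each party holds a random share and
# neither share alone reveals the true weights.
import numpy as np

def split_into_shares(weights, rng):
    """Split a weight vector into two additive random shares."""
    share_a = rng.uniform(-1.0, 1.0, size=weights.shape)  # random mask held by party A
    share_b = weights - share_a                           # complementary share held by party B
    return share_a, share_b

def reconstruct(share_a, share_b):
    """Only by combining both shares can the true weights be recovered."""
    return share_a + share_b

rng = np.random.default_rng(0)
true_weights = np.array([0.25, -0.7, 1.1])   # hypothetical weights after one round
w_a, w_b = split_into_shares(true_weights, rng)

assert np.allclose(reconstruct(w_a, w_b), true_weights)
print("Party A share:", w_a)  # looks random on its own
print("Party B share:", w_b)  # looks random on its own
```

In such a scheme, each share viewed in isolation is statistically uninformative about the true weights, which is why holding only a share after each round limits what one party can learn about the other's data.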